40 research outputs found

    full-FORCE: A Target-Based Method for Training Recurrent Networks

    Trained recurrent networks are powerful tools for modeling dynamic neural computations. We present a target-based method for modifying the full connectivity matrix of a recurrent network to train it to perform tasks involving temporally complex input/output transformations. The method introduces a second network during training to provide suitable "target" dynamics useful for performing the task. Because it exploits the full recurrent connectivity, the method produces networks that perform tasks with fewer neurons and greater noise robustness than traditional least-squares (FORCE) approaches. In addition, we show how introducing additional input signals into the target-generating network, which act as task hints, greatly extends the range of tasks that can be learned and provides control over the complexity and nature of the dynamics of the trained, task-performing network.
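
    The target-based idea above can be caricatured in a few lines: run a "teacher" network with the desired output injected as an input, record its recurrent-plus-input currents, then fit the learner's full connectivity so that J r(t) reproduces those target currents. The sketch below is a batch least-squares simplification under stated assumptions (the paper trains online with recursive least squares while the learner runs freely; clamping the learner's rates to the teacher's, the network sizes, and the sine target are all illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sizes and constants (not from the paper).
N, T, dt, tau, g = 100, 2000, 0.01, 0.1, 1.2
JD = g * rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))  # teacher recurrence
u = rng.uniform(-1, 1, size=N)                            # target-input weights

t = np.arange(T) * dt
f = np.sin(2 * np.pi * t / 4.0)                           # target output

# Run the teacher network with the target output injected as an input,
# recording its rates and its total recurrent + input currents.
x = rng.normal(scale=0.1, size=N)
R = np.empty((T, N))                # teacher firing rates
target_currents = np.empty((T, N))  # "target" dynamics to be reproduced
for k in range(T):
    r = np.tanh(x)
    R[k] = r
    target_currents[k] = JD @ r + u * f[k]
    x += (dt / tau) * (-x + target_currents[k])

# Fit the learner's *full* connectivity J so that J r(t) matches the
# target currents (ridge-regularized batch least squares).
lam = 1e-3
J = np.linalg.solve(R.T @ R + lam * np.eye(N), R.T @ target_currents).T

rel_err = np.mean((R @ J.T - target_currents) ** 2) / np.mean(target_currents ** 2)
```

    Because the teacher's currents already contain the target signal, the learner needs no separate feedback loop; its trained recurrent dynamics generate the task on their own, which is the core of the target-based approach.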

    Inferring brain-wide interactions using data-constrained recurrent neural network models

    Behavior arises from the coordinated activity of numerous anatomically and functionally distinct brain regions. Modern experimental tools allow unprecedented access to large neural populations spanning many interacting regions brain-wide. Yet, understanding such large-scale datasets necessitates both scalable computational models to extract meaningful features of inter-region communication and principled theories to interpret those features. Here, we introduce Current-Based Decomposition (CURBD), an approach for inferring brain-wide interactions using data-constrained recurrent neural network models that directly reproduce experimentally obtained neural data. CURBD leverages the functional interactions inferred by such models to reveal directional currents between multiple brain regions. We first show that CURBD accurately isolates inter-region currents in simulated networks with known dynamics. We then apply CURBD to multi-region neural recordings obtained from mice during running, macaques during Pavlovian conditioning, and humans during memory retrieval to demonstrate the widespread applicability of CURBD to untangle brain-wide interactions underlying behavior from a variety of neural datasets.
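
    The decomposition step itself is simple once a data-constrained model exists: the total recurrent input to a target region splits exactly into the currents contributed by each source region. A minimal sketch, assuming a trained connectivity matrix J and firing rates r supplied by a model fit elsewhere (random values stand in here, and the two-region labeling is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a trained data-constrained RNN: connectivity J and
# firing rates r, with units labeled by region.
N = 6
regions = {"A": np.arange(0, 3), "B": np.arange(3, 6)}
J = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
r = rng.normal(size=N)

def decompose_currents(J, r, regions):
    """Current into each target region from each source region:
    current(target <- source) = J[target_units, source_units] @ r[source_units]."""
    return {(tgt, src): J[np.ix_(tidx, sidx)] @ r[sidx]
            for tgt, tidx in regions.items()
            for src, sidx in regions.items()}

currents = decompose_currents(J, r, regions)

# Sanity check: the region-wise currents sum back to the full
# recurrent input J @ r, so the decomposition is exact.
total = np.concatenate([currents[("A", "A")] + currents[("A", "B")],
                        currents[("B", "A")] + currents[("B", "B")]])
assert np.allclose(total, J @ r)
```

    The directional structure comes for free: currents[("A", "B")] and currents[("B", "A")] are generally different, which is what lets the method distinguish who is driving whom.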

    Mapping quantitative trait loci (QTLs) associated with resistance to major pathotype-isolates of pearl millet downy mildew pathogen

    Downy mildew (DM), caused by Sclerospora graminicola, is the most devastating disease of pearl millet. It may lead to annual grain yield losses of up to ~80% and substantial deterioration of forage quality and production. The present study reports construction of a linkage map integrating simple sequence repeat (SSR) markers for detection of quantitative trait loci (QTLs) associated with DM resistance in pearl millet. A mapping population comprising 187 F8 recombinant inbred lines (RILs) was developed from the cross (ICMB 89111-P6 × ICMB 90111-P6). The RILs were evaluated for disease reaction at a juvenile stage in greenhouse trials. Genotyping data were generated from 88 SSR markers on the RILs and used to construct a genetic linkage map comprising 53 loci on seven linkage groups (LGs), spanning a total length of 903.8 cM with an average adjacent-marker distance of 18.1 cM. Linkage group 1 (LG1; 241.1 cM) was the longest and LG3 (23.0 cM) the shortest. The constructed linkage map was used to detect five large-effect QTLs for resistance to three different pathotype-isolates of S. graminicola from the Gujarat (Sg445), Haryana (Sg519) and Rajasthan (Sg526) states of India. One QTL was detected for resistance to isolate Sg445, and two each for Sg519 and Sg526 resistance, all on LG4, with LOD scores ranging from 5.1 to 16.0 and explaining a wide range (16.7% to 78.0%) of the phenotypic variation (R2). All five co-localized QTLs on LG4 associated with DM resistance to the three pathotype-isolates were contributed by the resistant parent ICMB 90111-P6. The QTLs reported here may be useful for breeding programs aiming to develop DM-resistant pearl millet cultivars with other desirable traits using genomic selection (GS) approaches.

    Learning quadratic receptive fields from neural responses to natural stimuli

    Models of neural responses to stimuli with complex spatiotemporal correlation structure often assume that neurons are selective for only a small number of linear projections of a potentially high-dimensional input. In this review, we explore recent modeling approaches where the neural response depends on the quadratic form of the input rather than on its linear projection, that is, the neuron is sensitive to the local covariance structure of the signal preceding the spike. To infer this quadratic dependence in the presence of an arbitrary (e.g., naturalistic) stimulus distribution, we review several inference methods, focusing in particular on two information theory–based approaches (maximization of stimulus energy and of noise entropy) and two likelihood-based approaches (Bayesian spike-triggered covariance and extensions of generalized linear models). We analyze the formal relationship between the likelihood-based and information-based approaches to demonstrate how they lead to consistent inference. We demonstrate the practical feasibility of these procedures by using model neurons responding to a flickering variance stimulus.
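
    The quadratic-dependence setup can be made concrete with a toy model neuron whose spike probability depends on the quadratic form sᵀQs of the stimulus s. For a white Gaussian stimulus ensemble, the classic spike-triggered covariance estimator recovers the stimulus axes Q acts on; the sketch below is a minimal illustration under that Gaussian assumption (all names, sizes, and the diagonal Q are illustrative, and this is the simple STC estimator rather than the naturalistic-stimulus methods the review focuses on):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy quadratic receptive field: one excitatory and one suppressive
# stimulus axis (the coordinate axes e_0 and e_1, for readability).
D = 8
Q = np.zeros((D, D))
Q[0, 0], Q[1, 1] = 1.0, -0.5

stimuli = rng.normal(size=(20000, D))   # white Gaussian stimulus ensemble
quad = np.einsum("ti,ij,tj->t", stimuli, Q, stimuli)
spikes = quad > np.quantile(quad, 0.9)  # threshold nonlinearity

# Spike-triggered covariance: for Gaussian stimuli, eigenvectors of the
# difference between the spike-conditioned covariance and the prior
# covariance recover the axes of the quadratic filter.
stc = np.cov(stimuli[spikes].T) - np.cov(stimuli.T)
eigvals, eigvecs = np.linalg.eigh(stc)

excitatory = eigvecs[:, np.argmax(eigvals)]   # should align with e_0
suppressive = eigvecs[:, np.argmin(eigvals)]  # should align with e_1
```

    The review's point is that this clean picture breaks for correlated, naturalistic stimuli, which is exactly where the information-theoretic and likelihood-based estimators it surveys come in.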

    Core of the method.

    <p>(<b>a</b>) A general implementation is shown here. The stimuli are natural image clips: pixel patches resized from a natural image database, as described in <a href="http://www.plosone.org/article/info:doi/10.1371/journal.pone.0071959#pone.0071959-Tkaik1" target="_blank">[36]</a>. Spikes are generated from the model neuron with a probability per time bin obtained by thresholding a quadratic function of the stimulus defined by the neuron's receptive-field matrix. (<b>b</b>) Mutual information between the spiking response of the model neuron and the quadratic stimulus projection is plotted as a function of the number of learning steps. The normalized information peaks and then plateaus. The black dots on the trace denote the points at which we extract the initial, the intermediate, and the optimal matrices for comparison. The maximally informative matrix reconstructed at the final step agrees well with the true receptive-field matrix, indicating convergence. (<b>c</b>) The root–mean–square (RMS) reconstruction error is plotted as a function of the number of learning steps. This error decreases steadily until either the randomly initialized matrix (solid line) or the matrix initialized to the spike–triggered covariance matrix (dashed line) matches the true receptive field. If the matrix is initialized to the spike–triggered covariance matrix, the initial RMS error is smaller and convergence is faster than for a random initialization. The black dot on the solid trace is at the same learning step as in panel (b).</p>